Results 1 - 20 of 24
1.
3rd International Conference on Power, Energy, Control and Transmission Systems, ICPECTS 2022 ; 2022.
Article in English | Scopus | ID: covidwho-2251394

ABSTRACT

COVID-19 has recently infected a large number of people worldwide, and medical service frameworks are strained as a result of the infection. The intensive care unit, which is part of the medical services sector, has experienced several challenges as a result of the low data quality offered by existing ICU clinical equipment. The Internet of Things has enhanced the capability for essential information mobility in medical services in the twenty-first century. Nonetheless, although many of today's models use IoT innovation to assess patients' well-being, executives lack an understanding of the most effective way to apply such innovation to ICU clinical equipment. This study introduces the IoT-Based Paradigm for Medical Equipment Management Systems (IoT MEMS), a novel IoT-based paradigm for successfully administering clinical hardware in ICUs. During the COVID-19 outbreak, IoT technology is used to boost the data stream between clinical hardware, management frameworks, and ICUs, enabling the maximum level of openness and fairness in clinical equipment redistribution. The conceptual and functional features of IoT MEMS were drawn in detail. Using IoT MEMS expands the capacity and limits of emergency clinics, effectively easing the COVID-19 burden. It will also have a substantial impact on data quality and will improve stakeholders' trust and transparency. © 2022 IEEE.

2.
1st International Conference on Artificial Intelligence and Data Science, ICAIDS 2021 ; 1673 CCIS:241-251, 2022.
Article in English | Scopus | ID: covidwho-2173804

ABSTRACT

Corona Virus Disease (COVID-19) has hit the world hard, and almost every country has faced its consequences, be it in the number of people affected or economically. Crowd management is incredibly tough in large environments, and continuous manual monitoring is difficult to execute. Vaccinated people are also being infected by the virus, so it is advisable to follow Public Health & Social Measures (PHSM) such as wearing a proper mask, sanitizing, and keeping social distance in crowded places. This paper presents a machine-learning-based real-time COVID alert and prevention system to ensure COVID-appropriate behavior in public places and social gatherings. The system comprises three modules: (i) real-time face mask detection, where persons with masks, improper masks, or no mask are detected and classified; (ii) real-time people counting, to enforce a limit on public meetings and social gatherings; and (iii) real-time social distance monitoring. All three modules are integrated and deployed on embedded hardware, NVIDIA's Jetson Nano. The implementation results are presented, and the detection is analyzed in real time on the edge-AI platform. © 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.
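Module (iii) above typically reduces to a pairwise distance check over the centroids of detected persons. A minimal sketch of that check (the coordinates and pixel threshold are illustrative, and camera calibration from pixels to metres is assumed to happen elsewhere):

```python
from itertools import combinations
from math import hypot

def distance_violations(centroids, min_distance_px):
    """Return index pairs of detected persons closer than the threshold.

    centroids: list of (x, y) pixel coordinates of person detections.
    min_distance_px: pixel distance corresponding to the required
    physical separation (camera calibration is assumed done elsewhere).
    """
    return [
        (i, j)
        for (i, a), (j, b) in combinations(enumerate(centroids), 2)
        if hypot(a[0] - b[0], a[1] - b[1]) < min_distance_px
    ]

# Hypothetical detections from a single frame: the first two are too close.
people = [(100, 200), (130, 210), (400, 380)]
print(distance_violations(people, min_distance_px=75))  # [(0, 1)]
```

In a deployed system the centroids would come from the person detector of module (ii), so the two modules can share one inference pass per frame.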

3.
2022 International Research Conference on Smart Computing and Systems Engineering, SCSE 2022 ; : 83-87, 2022.
Article in English | Scopus | ID: covidwho-2120528

ABSTRACT

The COVID-19 pandemic has affected the human lifestyle in an unprecedented way. Apart from vaccination, one of the best precautionary measures is wearing masks in public. Face mask detection models that are deployable on single-board computers (SBCs) enable data security, low latency, and low cost in real-world deployments of face mask detection systems. Offline deployment is possible on an SBC, as data stays on the device rather than on a server, securing monitored individuals' privacy. Thus, this paper aims to implement an autonomous vision system deployed on Raspberry Pi devices to detect face masks in real time. The performance of the MobileNet, MobileNetV2, and EfficientNet Convolutional Neural Network (CNN) architectures was compared on both standard hardware and edge devices. We used the TensorFlow Lite format to compress the models for deployment. Accuracy, precision, and recall were used as metrics to compare model performance. MobileNet achieved the best overall test accuracy of 97.93%, while MobileNetV2 ranked second at 96.12%. Each model's average inference time on standard hardware and on a Raspberry Pi 4 device was measured by connecting to a camera feed. MobileNetV2 outperformed the other two models in inference time on the Raspberry Pi device. © 2022 IEEE.
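The three comparison metrics named above follow directly from the confusion counts of the evaluated model; a small self-contained sketch for the binary mask/no-mask case (the counts are made up, not the paper's results):

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, precision, and recall from binary confusion counts."""
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return accuracy, precision, recall

# Hypothetical evaluation of a mask detector on 200 test images.
acc, prec, rec = classification_metrics(tp=90, fp=5, fn=10, tn=95)
print(f"accuracy={acc:.3f} precision={prec:.3f} recall={rec:.3f}")
# accuracy=0.925 precision=0.947 recall=0.900
```

For the three-class case in the paper (mask, improper mask, no mask), the same formulas would be applied per class and averaged.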

4.
129th ASEE Annual Conference and Exposition: Excellence Through Diversity, ASEE 2022 ; 2022.
Article in English | Scopus | ID: covidwho-2045146

ABSTRACT

This paper describes a novel project-oriented system on chip (SoC) design course. The course is taught in the Computer Science and Engineering (CSE) Department at the University of Texas at Arlington and is offered as CSE 4356 System on Chip Design for computer engineering undergraduates, as CSE 5356 for computer engineering graduate students, and as EE 5315 for electrical engineering graduate students. It is taught as one course combining all numbers. All students are given the same lectures, course materials, assignments, and projects. Grading standards and expectations are the same for all students as well. The course in its current form was first offered in fall 2020 and was taught online due to COVID-19 restrictions. The course was offered again in fall 2021 in a traditional on-campus, in-person mode of delivery. Two seasoned educators, with more than eighty years of total teaching experience, combined to team teach the course. One also brought more than thirty years of industrial design experience to the course. SoC FPGA devices have been available for use by designers for more than 10 years and are widely used in applications that require both an embedded microcomputer and FPGA-based logic for real-time computationally-intense solutions. Such solutions require skills in C programming, HDL programming, bus topologies forming the bridge between FPGA fabric and the microprocessor space, Linux operating systems and virtualization, and kernel device driver development. The breadth of the skills that were conveyed to students necessitated a team teaching approach to leverage the diverse background of the instructors. With such a wide range of topics, one of the biggest challenges was developing a course that was approachable for a greatly varied population of students - a mix of Computer Engineering (CpE) and Electrical Engineering (EE) students at both the graduate and undergraduate level. 
Another, perhaps less obvious, challenge was the inherently applied focus of the course, which presents difficulties to many graduate students whose undergraduate degree lacked a robust hands-on design experience. Selection of an appropriate project was key to making the course effective and providing a fun learning experience for students. The projects were aligned to relevant industry applications, stressing complex modern intellectual property (IP) work flows, while still being approachable to students. The design of a universal asynchronous receiver transmitter (UART) IP module in 2020 and a serial peripheral interface (SPI) IP module in 2021 were chosen as the projects for the first two offerings of the course. The Terasic/Intel DE1-SoC development board and Intel Quartus Prime 18.1 design software were the technologies chosen for the course. The development board and basic test instruments were provided to each student in a take-home lab kit. The system on chip design course has proven to be a popular but challenging course for our undergraduate and graduate students in computer engineering and electrical engineering. The course has demonstrated that it is possible to successfully teach an advanced design-oriented course to students of varying majors, levels, educational backgrounds, and cultures. © American Society for Engineering Education, 2022.

5.
36th IEEE International Parallel and Distributed Processing Symposium Workshops, IPDPSW 2022 ; : 196-205, 2022.
Article in English | Scopus | ID: covidwho-2018897

ABSTRACT

Selective sweep detection carries theoretical significance and has several practical implications, from explaining the adaptive evolution of a species in an environment to understanding the emergence of viruses from animals, such as SARS-CoV-2, and their transmission from human to human. The plethora of available genomic data for population genetic analyses, however, poses various computational challenges to existing methods and tools, leading to prohibitively long analysis times. In this work, we accelerate LD (linkage disequilibrium)-based selective sweep detection using GPUs and FPGAs on personal computers and datacenter infrastructures. LD has been previously efficiently accelerated with both GPUs and FPGAs. However, LD alone cannot serve as an indicator of selective sweeps. Here, we complement previous research with dedicated accelerators for the ω statistic, which is a direct indicator of a selective sweep. We evaluate the performance of our accelerator solutions for computing the ω statistic and for a complete sweep detection method, as implemented by the open-source software OmegaPlus. In comparison with a single CPU core, the FPGA accelerator delivers up to 57.1× and 61.7× faster computation of the ω statistic and the complete sweep detection analysis, respectively. The respective speedups attained by the GPU-accelerated version of OmegaPlus are 2.9× and 12.9×. The GPU-accelerated implementation is available for download here: https://github.com/MrKzn/omegaplus.git. © 2022 IEEE.
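The ω statistic mentioned above contrasts LD within two flanking windows against LD across them; a minimal numpy sketch under the usual definition (an illustration of the statistic only, not code from OmegaPlus):

```python
import numpy as np

def omega(r2, l):
    """Compute the omega statistic for a window split after SNP index l.

    r2 is a symmetric (n, n) matrix of pairwise r^2 (LD) values; the left
    window holds SNPs 0..l-1 and the right window holds SNPs l..n-1.
    """
    n = r2.shape[0]
    i, j = np.triu_indices(n, k=1)            # all unordered SNP pairs
    vals = r2[i, j]
    within = vals[(j < l) | (i >= l)].sum()   # both SNPs on the same side
    between = vals[(i < l) & (j >= l)].sum()  # one SNP on each side
    pairs_within = l * (l - 1) // 2 + (n - l) * (n - l - 1) // 2
    return (within / pairs_within) / (between / (l * (n - l)))

# A sweep-like pattern: strong LD on each flank, weak LD across the flanks.
r2 = np.full((4, 4), 0.1)
r2[0, 1] = r2[1, 0] = r2[2, 3] = r2[3, 2] = 0.9
print(omega(r2, l=2))  # ~9: within-window LD dominates cross-window LD
```

A sweep detector scans l (and the window extents) along the chromosome and reports positions where ω peaks; the accelerators in the paper speed up exactly this inner computation.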

6.
57th International Scientific Conference on Information, Communication and Energy Systems and Technologies, ICEST 2022 ; 2022.
Article in English | Scopus | ID: covidwho-2018824

ABSTRACT

This paper proposes pandemic support system design exercises, from both a hardware and a software perspective, as a constituent part of higher-education computer science courses. Two case studies in the context of the computer science and automation study programmes at the University of Niš, Faculty of Electronic Engineering, Serbia are covered: Intelligent Information Systems and Microcontroller Programming. In the first course, the topics cover information system implementation relying on Java Enterprise Edition (JEE) technology with machine learning elements provided by the Weka API; a smart vaccination process support information system is presented as an example. In the second course, the focus is on PIC16-family microcontrollers and RTOS-based system implementation using the CCS C compiler, and the example presented is a control unit for indoor coronavirus safety control. © 2022 IEEE.

7.
31st Annual Conference of the European Association for Education in Electrical and Information Engineering, EAEEIE 2022 ; 2022.
Article in English | Scopus | ID: covidwho-1973456

ABSTRACT

The COVID-19 pandemic has forced educational institutes worldwide to rapidly adapt to a remote-only learning environment. This poses significant challenges for laboratories involving hands-on training and interaction with specialized equipment, such as those of physical computing courses. In this context, we present a simple, low-cost solution for conducting remote laboratories in microcontroller and PLC programming, by adapting existing lab hardware and extending it to function remotely. A key element of the proposed setup is a physical input simulator board, which allows remote users to issue inputs (digital and/or analog signals) to the development hardware, through a graphical user interface application running on the workstation PC. The latter is also equipped with a camera and a light, so as to form a complete remote access and monitoring solution. This setup was used during the coronavirus lockdown in two courses of the Dept. of Electrical and Computer Engineering of the Hellenic Mediterranean University, receiving a very positive evaluation from the students. The proposed setup can also be used as a hybrid solution for laboratories during normal conditions, and to facilitate efficient use of teaching resources by allowing 24/7 access to the laboratory units. © 2022 IEEE.

8.
27th ACM Conference on Innovation and Technology in Computer Science Education, ITiCSE 2022 ; 1:145-150, 2022.
Article in English | Scopus | ID: covidwho-1962407

ABSTRACT

The rapid transition to online education due to the COVID-19 pandemic left many instructors needing to redesign their course projects, as students no longer had access to physical hardware. This paper describes the development of an open-source containerized RISC-V based game console emulator that replaced physical hardware for use in course projects. The tool was initially designed and used in a graduate operating systems course and then subsequently used in a lower-division computer organization and machine-dependent programming course. The container provides a full toolchain with gcc compiler, RISC-V game console emulator with integrated debugger, example program, and input recording/auto-run tool designed for auto-grading. The use of a container reduced the barrier to entry for the students, allowing them to get up and running in a relatively short period of time. Given the successful deployment of the tool in the previous courses, the tool was used again in both the lower-division course and the upper-division undergraduate operating systems course this past fall. © 2022 Owner/Author.

9.
11th Mediterranean Conference on Embedded Computing, MECO 2022 ; 2022.
Article in English | Scopus | ID: covidwho-1948826

ABSTRACT

As in almost every other field, the application and design of IoT is important in mechanical engineering for the construction of automated machines, production lines, and process plants. The education of engineers has to follow this trend. A new course in this field was established and had to be organised online due to COVID pandemic restrictions. The challenge of enabling online programming exercises on an embedded system, where students at home control process hardware over the internet, had to be solved. In this work, a solution is presented. Three communication paths were used: the first was an office communication platform for creating a virtual classroom. Furthermore, a server providing command-line interfaces for communication between the programmers and the target hardware was established. Finally, an external MQTT broker was used for data exchange between the controller hardware and an application running on the PC. Low-cost hardware was constructed to provide a realistic thermal process. About twenty students completed the course with good results. © 2022 IEEE.
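A "realistic thermal process" of the kind the low-cost hardware provides can be mimicked in software as a first-order plant, which is handy for letting students test controller logic before their remote hardware slot. The time constant, gain, and ambient temperature below are invented values, not the course's actual rig:

```python
def simulate_thermal(duty_cycles, t_ambient=22.0, gain=40.0, tau=120.0, dt=1.0):
    """First-order thermal plant: heater duty cycle in [0, 1] -> temperature.

    Integrates dT/dt = (t_ambient + gain * duty - T) / tau with Euler steps
    of dt seconds, returning the temperature after each step.
    """
    temp = t_ambient
    trace = []
    for duty in duty_cycles:
        temp += dt * (t_ambient + gain * duty - temp) / tau
        trace.append(temp)
    return trace

# Full heater power for 10 minutes: the temperature climbs toward the
# 62 degree steady state but lags behind it with time constant tau.
trace = simulate_thermal([1.0] * 600)
print(round(trace[-1], 1))  # ~61.7
```

In the course setting, the student's controller would publish duty cycles over the MQTT broker and subscribe to the measured temperature instead of calling this function directly.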

10.
IEEE Design and Test ; 39(4):5-6, 2022.
Article in English | Scopus | ID: covidwho-1948823

ABSTRACT

Hardware is the foundation of many systems, ranging from embedded systems and Internet of Things devices to cyber-physical systems. The increasing design complexity of hardware continues to challenge our ability to provide robust security guarantees, thereby undermining the security of systems and resulting in security breaches and leakage of private information. The direct and indirect costs of addressing security vulnerabilities (e.g., root cause analysis, deploying fixes and mitigations, and risk of product recalls) not only damage the reputation of a company, but also prolong time-to-market deadlines, thereby squeezing the supply chain. To this end, researchers from academia and industry have been developing tools and techniques that can help identify and mitigate security issues in hardware, thereby building a bedrock for system security. One important task toward this ambitious goal is to identify the best set of attack and defense tools and techniques in hardware and embedded security, which typically spans many communities ranging from devices to circuits to architecture to CAD to cryptography. This special issue presents the articles selected during the third edition of the workshop 'Top Picks in Hardware and Embedded Security' ('Top Picks' for short), held virtually (due to COVID-19) on November 5, 2021, co-located with the IEEE/ACM International Conference on Computer-Aided Design. © 2013 IEEE.

11.
14th International Conference on Virtual, Augmented and Mixed Reality, VAMR 2022 Held as Part of the 24th HCI International Conference, HCII 2022 ; 13317 LNCS:410-422, 2022.
Article in English | Scopus | ID: covidwho-1919658

ABSTRACT

The recent pandemic of COVID-19 is placing smokers at a high risk of death as a result of the combination of smoking and COVID-19. This signals a need to address this problem among dual users (cigarette and vape users) and provide them with successful tools to quit tobacco. This pilot project aims to test a novel tool, a combined Virtual Reality and Motivational Interviewing approach, that will assist dual users in quitting tobacco products. The investigators wanted to pilot test the equipment and scenario for user-friendliness and interface. For the first phase of the pilot, we developed four Virtual Reality scenarios that contain different triggers for smoking, such as noise, stress, and cigarettes. We used the Oculus Quest 2 for the hardware because the equipment does not require towers or connections to computers, operates over Wi-Fi, and is mobile. To develop the software, we used the Unity3D game engine. A total of 21 participants tested the equipment and scenarios. The participants ranged between ages 18-71 with various gaming and virtual reality experience. The majority of the participants felt immersed in the virtual reality environment. Some participants had challenges with the equipment and the software and provided valuable feedback to enhance the scenarios. The virtual reality environment promises to be a novel tool to assist tobacco users, mainly dual users, in quitting tobacco. © 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.

12.
International Journal of Parallel, Emergent and Distributed Systems ; 2022.
Article in English | Scopus | ID: covidwho-1900955

ABSTRACT

Field-programmable gate arrays (FPGAs) have become widely prevalent in recent years as a great alternative to application-specific integrated circuits (ASICs) and as a potentially cheap alternative to expensive graphics processing units (GPUs). Introduced as a prototyping solution for ASICs, FPGAs are now widely popular in applications such as artificial intelligence (AI) and machine learning (ML) models that require processing data rapidly. As a relatively low-cost option compared to GPUs, FPGAs have the advantage of being reprogrammable for use in almost any data-driven application. In this work, we propose an easily scalable and cost-effective cluster-based co-processing system using FPGAs for ML and AI applications that is easily reconfigured to the requirements of each user application. The aim is to introduce a clustering system of FPGA boards to improve the efficiency of the training component of machine learning algorithms. Our proposed configuration provides an opportunity to utilise relatively inexpensive FPGA development boards to produce a cluster without expert knowledge of VHDL, Verilog, or the system designs related to FPGA development. The system consists of two parts: a computer-based host application to control the cluster, and an FPGA cluster connected through a high-speed Ethernet switch. This structure allows users to customise and adapt the system without much effort. The methods proposed in this paper make it possible to use any FPGA board with an Ethernet port as part of the cluster and to scale the cluster without bound. To demonstrate the flexibility and portability of the proposed work, a two-part experiment (a homogeneous and a heterogeneous cluster) was conducted, with results compared against a desktop computer and against combinations of FPGAs in two clusters. Data sets ranging from 60,000 to 14 million records, including stroke prediction and COVID-19 data, were used in the experiments. Results suggest that the proposed system performs close to 70% faster than a traditional computer with similar accuracy rates. © 2022 Informa UK Limited, trading as Taylor & Francis Group.
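One concrete job of the host application described above is sharding the training workload across however many boards answer on the Ethernet switch; a toy sketch of that partitioning step (the board names and round-robin policy are illustrative assumptions, not details from the paper):

```python
def shard_dataset(samples, boards):
    """Split a dataset round-robin across the FPGA boards in the cluster.

    Round-robin keeps shard sizes within one sample of each other even
    when len(samples) is not divisible by the number of boards.
    """
    shards = {board: [] for board in boards}
    for i, sample in enumerate(samples):
        shards[boards[i % len(boards)]].append(sample)
    return shards

# Ten samples spread over a hypothetical three-board cluster.
shards = shard_dataset(list(range(10)), ["fpga-0", "fpga-1", "fpga-2"])
print({b: len(s) for b, s in shards.items()})
# {'fpga-0': 4, 'fpga-1': 3, 'fpga-2': 3}
```

Because the policy only depends on the list of reachable boards, adding or removing a board between runs changes the sharding without any HDL or host-code changes, which matches the paper's goal of scaling without FPGA design expertise.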

13.
12th ACM Conference on Data and Application Security and Privacy, CODASPY 2022 ; : 4-15, 2022.
Article in English | Scopus | ID: covidwho-1874739

ABSTRACT

Machine learning models based on Deep Neural Networks (DNNs) are increasingly deployed in a wide variety of applications, ranging from self-driving cars to COVID-19 diagnosis. To supply the computational power necessary to train a DNN, cloud environments with dedicated Graphics Processing Unit (GPU) hardware support have emerged as critical infrastructure. However, there are many integrity challenges associated with outsourcing the computation to use GPU power, due to its inherent lack of safeguards to ensure computational integrity. Various approaches have been developed to address these challenges, building on trusted execution environments (TEE). Yet, no existing approach scales up to support realistic integrity-preserving DNN model training for heavy workloads (e.g., deep architectures and millions of training examples) without sustaining a significant performance hit. To mitigate the running time difference between pure TEE (i.e., full integrity) and pure GPU (i.e., no integrity), we combine random verification of selected computation steps with systematic adjustments of DNN hyperparameters (e.g., a narrow gradient clipping range), which limits the attacker's ability to shift the model parameters arbitrarily. Experimental analysis shows that the new approach can achieve a 2X to 20X performance improvement over a pure TEE-based solution while guaranteeing an extremely high probability of integrity (e.g., 0.999) with respect to state-of-the-art DNN backdoor attacks. © 2022 ACM.
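The random-verification idea can be sketched independently of any TEE API: the trusted side recomputes a sampled fraction of the training steps claimed by the untrusted GPU worker and compares the clipped updates. Everything below (the step records, sampling rate, clipping bound, and tolerance) is an illustrative reconstruction, not the paper's implementation:

```python
import random

def clip(value, bound):
    """Gradient clipping: confine an update to [-bound, bound]."""
    return max(-bound, min(bound, value))

def verify_steps(reported, recompute, sample_rate=0.1, bound=1.0, tol=1e-6, seed=0):
    """Recompute a random sample of training steps and compare updates.

    reported:  {step: update} as claimed by the untrusted GPU worker.
    recompute: function(step) -> trusted recomputation of that update.
    Returns the list of audited steps whose clipped updates disagree.
    """
    rng = random.Random(seed)
    steps = sorted(reported)
    audited = rng.sample(steps, max(1, int(len(steps) * sample_rate)))
    return [
        s for s in audited
        if abs(clip(reported[s], bound) - clip(recompute(s), bound)) > tol
    ]

# Honest worker: every recomputed update matches, so the audit is empty.
honest = {s: 0.01 * s for s in range(100)}
print(verify_steps(honest, recompute=lambda s: 0.01 * s))  # []
```

The clipping bound is what makes sparse auditing meaningful: even on unaudited steps, a cheating worker can shift the parameters by at most `bound` per step, so the achievable damage between detections stays limited.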

14.
2022 Systems of Signals Generating and Processing in the Field of on Board Communications, SOSG 2022 ; 2022.
Article in English | Scopus | ID: covidwho-1806933

ABSTRACT

Information and communication technologies in the context of the Industrial Revolution 4.0 are used in all industries. In the context of the COVID-19 pandemic and the shift to distance learning, computer hardware and software systems are an effective tool for the successful implementation of the training process in universities. Virtual laboratories, simulators, and test computer systems are components of the virtual educational environment of universities. Today these tools contain a specific multimedia element: a virtual robot assistant. Students' interaction with a virtual assistant should be comfortable and support a successful educational process. The authors examined several aspects of effective student interaction with the multimedia components of the interface of a computerized learning environment. The results of the conducted experiment can be used by developers of computer virtual laboratories, simulators, experimental research stands, and test computer systems that include a virtual robot assistant, to optimize educational activities, including distance learning. © 2022 IEEE.

15.
IEEE Transactions on Learning Technologies ; 2022.
Article in English | Scopus | ID: covidwho-1788793

ABSTRACT

Understanding the architecture of a processor can be uninteresting and discouraging for computer science students, since low-level details of computer architecture are often perceived to lack real-world impact. These courses typically have a strong practical component in which students learn the fundamentals of computer architecture and the handling of I/O operations through the development of simple programs in a low-level assembly programming language. Since these practical sessions require strong involvement, student attendance and withdrawal rates are poor, which lowers academic results and introduces a negative feedback loop that preconditions students to dislike them. This paper introduces a new methodology for the practical sessions of Computer Organization and Design courses. This methodology disavows the use of simulators and focuses on actual hardware to promote a feeling of proximity to the execution and outcome of the programs. The proposed setup uses Raspberry Pi devices to encourage students to work autonomously, owing to their low cost, capability of running an OS, and rich ecosystem of simple hardware devices. The setup is completed with RISC OS, which combines a simple window-based graphical interface with low-level management of the hardware without requiring software abstraction layers. The work presents the methodology and the UCDebug tool, developed to help students debug their code in RISC OS. After the introduction of the new setup at the University of Cantabria, academic results and student satisfaction have improved. The setup has also made it possible to sustain a similar organization of the courses throughout the COVID-19 pandemic. IEEE

16.
2nd International Conference on Computing and Information Technology, ICCIT 2022 ; : 278-284, 2022.
Article in English | Scopus | ID: covidwho-1769608

ABSTRACT

The demand for energy sources such as electricity is increasing as the population grows, resulting in high billing costs and greater energy consumption. These issues are compounded by other factors, for example, decreased awareness among residents about how to save energy, especially kids and elderly people who forget to turn off home appliances and lights when they are not needed. HARMS provides a smart solution built on machine learning (ML) and recommendations: it monitors power consumption, shows recommendations, and controls home appliances based on residents' behavior, learning when they are willing to turn on a room light or any other home appliance and when to turn it off, in order to enhance energy saving. HARMS also tracks inhabitants' usual and unusual behavior in order to take action. We must note that, due to the exceptional situation (the COVID-19 pandemic), HARMS may be realized using actual hardware, simulation, or both. The hardware part consists of a microcomputer and motion, light, and current-transformer sensors. The software part consists of a control system that collects data from the sensors and monitors power consumption, a database to store the collected data, appropriate algorithms for the recommender system, and an Android application to interact with residents. The simulation consists of a web-based application representing the home environment and the appliances, including the control and recommender systems. The project will be piloted at the College of Computer Sciences and Information Technology (CCSIT) at King Faisal University (KFU). © 2022 IEEE.
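The appliance-control decision HARMS describes can be illustrated with a simple rule over the sensor readings; the thresholds and signal names below are invented placeholders for what the ML recommender would actually learn from residents' behavior:

```python
def recommend_light_state(motion_detected, ambient_lux, idle_minutes,
                          lux_threshold=150, idle_limit=10):
    """Rule-of-thumb stand-in for the learned recommender.

    Keep the light on only when the room is occupied, recently active,
    and too dark; otherwise recommend switching it off to save energy.
    """
    if not motion_detected and idle_minutes >= idle_limit:
        return "off"  # nobody has been around for a while
    if ambient_lux >= lux_threshold:
        return "off"  # daylight is already sufficient
    return "on"

print(recommend_light_state(motion_detected=True, ambient_lux=40, idle_minutes=0))    # on
print(recommend_light_state(motion_detected=False, ambient_lux=40, idle_minutes=30))  # off
```

In the full system, the motion and light readings would come from the sensors listed above, and the ML component would replace the fixed thresholds with per-resident learned preferences.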

17.
2021 IEEE International Conference on Engineering, Technology and Education, TALE 2021 ; : 42-47, 2021.
Article in English | Scopus | ID: covidwho-1741274

ABSTRACT

Domain-Specific Architectures (DSAs) and hardware-software co-design are greatly emphasized in the CS community, which demands a significant number of participants with Computer System (CSys) capabilities and skills. Conventional CSys courses in a lecture-lab format are limited by physical resources and are inherently difficult to scale for cultivating talent at large scale. Online teaching is a potential alternative for instantly enlarging the face-to-face class size. Unfortunately, simply putting the lecture contents of CSys courses online lacks 1) personal attention, 2) learner-instructor interactions, and 3) real-hardware experimental environments. To tackle the above challenges, we introduce a four-phase online CSys course program and the related teaching methods for a cloud-based teaching platform. The four-phase course program includes two basic/required stages and two advanced/optional stages to promote students' knowledge and skill level with appropriate personal attention. We studied whether online interaction methods, such as in-class chat and one-on-one online grading interviews, can strengthen the connections between teachers and students in both lectures and labs. We created a heterogeneous cloud platform to enable students nationwide to reliably conduct labs or projects on remote programmable hardware. We believe that our proposed course design methodology is beneficial to other CS courses in the post-COVID-19 era. © 2021 IEEE.

18.
IEEE Access ; 2022.
Article in English | Scopus | ID: covidwho-1709346

ABSTRACT

In this study, a new blockchain protocol and a novel architecture that integrates the advantages offered by edge computing, artificial intelligence (AI), IoT end-devices, and blockchain were designed, developed, and validated. This new architecture has the ability to monitor the environment, collect data, analyze it, process it using an AI-expert engine, provide predictions and actionable outcomes, and finally share it on a public blockchain platform. For the use-case implementation, the pandemic caused by the wide and rapid spread of the novel coronavirus COVID-19 was used to test and evaluate the proposed system. Recently, various authors have traced the virus spread in sewage water and studied how it can be used as a tracking system. Early warning notifications can allow governments and organizations to take appropriate actions at the earliest stages possible. The system was validated experimentally using 14 Raspberry Pis, and the results and analyses proved that the system is able to utilize low-cost and low-power flexible IoT hardware at the processing layer to detect COVID-19 and predict its spread using the AI engine, with an accuracy of 95%, and share the outcome over the blockchain platform. This is accomplished when the platform is secured by the honesty-based distributed proof of authority (HDPoA) and without any substantial impact on the devices' power sources, as there was only a power consumption increase of 7% when the Raspberry Pi was used for blockchain mining and 14% when used to produce an AI prediction. Author

19.
4th International Conference on Computer and Informatics Engineering, IC2IE 2021 ; : 175-180, 2021.
Article in English | Scopus | ID: covidwho-1705763

ABSTRACT

Practical learning on computer maintenance for vocational students during the COVID-19 pandemic could not be implemented optimally. Hardware that is difficult for students to access makes the learning barriers in this subject even greater. Therefore, through this research, mobile-based learning media was developed to visualize computer components with AR technology and 3D animation. The research method used is the ADDIE model. The research began by diagnosing the problem, describing needs, and finding appropriate solutions for computer maintenance learning. Next, the product design process and user journey were carried out, followed by developing the application with 3D animation assets and learning materials. During implementation, the application was evaluated by media and content experts to determine its validity. The media and content validation instrument consists of 4 aspects with 46 items. The media is equipped with 3D objects that can be used to help students observe computer hardware. Media validation scored 79.49%, meeting the validity criteria, so the media can be used in learning. Content validation is also in the valid category, with a score of 80.2%. Several improvements were made to increase the usability and attractiveness of the media so that students' interest in using it increased. In the future, this media can be applied in learning to observe its impact on learning outcomes, student interest, and critical thinking in computer troubleshooting. © 2021 IEEE.

20.
2021 ASEE Virtual Annual Conference, ASEE 2021 ; 2021.
Article in English | Scopus | ID: covidwho-1696339

ABSTRACT

During the Spring 2020 semester at Old Dominion University (ODU), a completely online mode of instruction was adopted to arrest the Coronavirus Disease 2019 (COVID-19) outbreak. As a consequence, each unit within the university was practically required to make its own arrangement to ensure the students and faculty were well equipped to smoothly transition to the new mode of instruction and, at the same time, ensure student success in the program. The Department of Mechanical and Aerospace Engineering (MAE) at ODU responded with an effective strategy to equip its faculty with a cost-effective, timely solution for transitioning to the new mode of instruction. As part of the MAE department's strategy to equip its faculty with low-cost, simple-to-use equipment, a minimalistic hardware setup was identified, consisting of (i) a camera/webcam, (ii) a microphone, and (iii) an adjustable webcam stand. The MAE department bought several Logitech webcams, which came with a built-in microphone, and flexible gooseneck camera stands with C-clamp desk mounts. A simple assembly of this setup connected to a computer was used to (i) pre-record lectures and (ii) conduct live sessions. For seamless recording, it was recommended that the Zoom application, a video conferencing, web conferencing, webinar hosting, and screen sharing computer software, be installed on the computer used for online instruction. An elaborate user manual was prepared for using the hardware setup along with the Zoom application for online instruction. This article discusses elements of the cost-effective, timely solution adopted by the MAE department at ODU. It describes the implementation of completely online flipped-style classroom instruction using low-cost, simple-to-use equipment. To assess the effectiveness of the online flipped-style classroom instruction, the article presents the results of a survey conducted among the students of an MAE course. © American Society for Engineering Education, 2021
